In contrastive representation learning, data representations are trained so that an image instance can still be classified even when the image is altered by augmentations. However, depending on the dataset, some augmentations can corrupt the information in an image beyond recognition, and such augmentations can lead to collapsed representations. We offer a partial solution to this problem by formalizing a stochastic encoding process in which there is a trade-off between the information corrupted by the augmentation and the information preserved by the encoder. We show that, with an InfoMax objective based on this framework, we can learn a data-dependent distribution of augmentations that avoids the collapse of the representation.
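The contrastive setup described above builds on a standard instance-discrimination objective such as InfoNCE. A minimal NumPy sketch of that objective (batch size, dimensions, and temperature are illustrative, not taken from the paper):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Contrastive InfoNCE loss between two augmented views of a batch.

    z1[i] and z2[i] are embeddings of two augmentations of the same image;
    every other row of z2 serves as a negative for z1[i].
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
aligned = info_nce(z, z)           # matching views: low loss
shuffled = info_nce(z, z[::-1])    # mismatched views: higher loss
```

An augmentation that destroys the identifying information of an image makes the positive pair indistinguishable from negatives, which is the failure mode the abstract's data-dependent augmentation distribution is designed to avoid.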
The goal of the out-of-distribution (OOD) generalization problem is to train a predictor that generalizes across all environments. Popular methods in this field use the hypothesis that such a predictor should be an \textit{invariant predictor} that captures mechanisms that remain constant across environments. While these methods have been experimentally successful in various case studies, there is still much room for theoretical validation of this hypothesis. This paper presents a set of theoretical conditions necessary for an invariant predictor to achieve OOD optimality. Our theory not only applies to nonlinear cases, but also generalizes the necessary conditions used in \citet{Rojas2018Invariant}. We also derive a gradient-alignment algorithm from our theory and demonstrate its competitiveness on two of the three \textit{invariance unit tests} proposed by \citet{Aubinlinear}.
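The gradient-alignment idea can be illustrated by penalizing disagreement between per-environment gradients. This toy version (linear regression with noiseless labels; names and the penalty form are illustrative, not the paper's exact algorithm) shows the quantity being aligned:

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of mean squared error for a linear predictor X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def alignment_penalty(w, envs):
    """Penalize deviation of each environment's gradient from the mean
    gradient: zero when all environments agree on the update direction."""
    grads = [grad_mse(w, X, y) for X, y in envs]
    mean_g = np.mean(grads, axis=0)
    return sum(float(np.linalg.norm(g - mean_g) ** 2) for g in grads)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
envs = []
for _ in range(2):
    X = rng.normal(size=(20, 3))
    envs.append((X, X @ w_true))   # noiseless labels share one invariant mechanism

# At the invariant solution every environment's gradient vanishes,
# so the alignment penalty is (numerically) zero.
penalty_at_invariant = alignment_penalty(w_true, envs)
penalty_elsewhere = alignment_penalty(np.zeros(3), envs)
```

A predictor relying on an environment-specific (spurious) feature would receive conflicting gradients from different environments, and the penalty pushes training away from it.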
The purpose of this study is to introduce new design-criteria for next-generation hyperparameter optimization software. The criteria we propose include (1) define-by-run API that allows users to construct the parameter search space dynamically, (2) efficient implementation of both searching and pruning strategies, and (3) easy-to-setup, versatile architecture that can be deployed for various purposes, ranging from scalable distributed computing to light-weight experiment conducted via interactive interface. In order to prove our point, we will introduce Optuna, an optimization software which is a culmination of our effort in the development of a next generation optimization software. As an optimization software designed with define-by-run principle, Optuna is particularly the first of its kind. We will present the design-techniques that became necessary in the development of the software that meets the above criteria, and demonstrate the power of our new design through experimental results and real world applications. Our software is available under the MIT license (https://github.com/pfnet/optuna/).
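The define-by-run principle means the search space is constructed while the objective function executes, so later suggestions can depend on earlier ones. The sketch below mimics Optuna's `suggest_*` style with plain random search; the `Trial` and `Study` classes here are a simplified stand-in, not Optuna itself:

```python
import random

class Trial:
    """Minimal stand-in for an Optuna-style trial object (illustrative)."""
    def __init__(self):
        self.params = {}

    def suggest_float(self, name, low, high):
        # The parameter is registered the moment this call runs, so the
        # search space can depend on earlier suggestions (define-by-run).
        value = random.uniform(low, high)
        self.params[name] = value
        return value

    def suggest_categorical(self, name, choices):
        value = random.choice(choices)
        self.params[name] = value
        return value

class Study:
    """Random-search stand-in for a study; real Optuna adds samplers/pruners."""
    def __init__(self):
        self.best_value = float("inf")
        self.best_params = None

    def optimize(self, objective, n_trials):
        for _ in range(n_trials):
            trial = Trial()
            value = objective(trial)
            if value < self.best_value:
                self.best_value = value
                self.best_params = trial.params

def objective(trial):
    # Define-by-run: the branch taken decides which parameters even exist.
    kind = trial.suggest_categorical("kind", ["quadratic", "shifted"])
    if kind == "quadratic":
        x = trial.suggest_float("x", -10.0, 10.0)
        return x * x
    y = trial.suggest_float("y", -10.0, 10.0)
    return (y - 3.0) ** 2

random.seed(0)
study = Study()
study.optimize(objective, n_trials=200)
```

Because the space is declared inside the objective, conditional and dynamically sized search spaces need no up-front specification, which is the API property the abstract's criterion (1) refers to.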
One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to the previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
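Spectral normalization divides each weight matrix by an estimate of its largest singular value, which can be obtained cheaply by power iteration. A NumPy sketch (the paper's one-power-iteration-per-update bookkeeping is omitted here for clarity):

```python
import numpy as np

def spectral_normalize(W, n_iter=50, eps=1e-12):
    """Estimate the largest singular value of W by power iteration and
    return W divided by it, so the normalized matrix has spectral norm ~1."""
    u = np.random.RandomState(0).randn(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= (np.linalg.norm(v) + eps)
        u = W @ v
        u /= (np.linalg.norm(u) + eps)
    sigma = u @ W @ v                  # estimated largest singular value
    return W / sigma, sigma

W = np.array([[3.0, 0.0],
              [0.0, 1.0]])            # largest singular value is 3
W_sn, sigma = spectral_normalize(W)
```

Constraining every layer's spectral norm bounds the Lipschitz constant of the discriminator, which is the mechanism behind the training stabilization the abstract reports.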
We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on the ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
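The projection form replaces concatenation with an inner product between a class embedding and the image features: D(x, y) = v_y^T phi(x) + psi(phi(x)). A minimal sketch (the dimensions and the linear psi head are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim = 10, 16

V = rng.normal(size=(num_classes, feat_dim))   # one embedding v_y per class
w_psi = rng.normal(size=feat_dim)              # linear unconditional head psi

def discriminator_output(phi_x, y):
    """Projection discriminator: inner product of the class embedding with
    the features, plus an unconditional term -- no concatenation of y."""
    return float(V[y] @ phi_x + w_psi @ phi_x)

phi_x = rng.normal(size=feat_dim)              # features phi(x) from some backbone
score = discriminator_output(phi_x, y=3)
```

The inner-product term corresponds to the log-ratio contribution of the class-conditional density, which is why the abstract describes the design as respecting the underlying probabilistic model rather than being an ad hoc fusion of label and features.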
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
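The label-free construction can be sketched end to end: find the perturbation direction that most increases the KL divergence of the model's own prediction, then penalize that divergence. This toy version uses a linear classifier and a finite-difference gradient so it stays self-contained (the paper uses backpropagation; all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))            # tiny linear classifier (illustrative)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(W @ x)

def kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def virtual_adversarial_perturbation(x, xi=0.1, epsilon=0.5):
    """One power-iteration estimate of the direction that most increases
    KL(p(.|x) || p(.|x+r)). Only the model's own prediction is used --
    no labels -- which is what makes VAT applicable to unlabeled data."""
    p = predict(x)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    base = kl(p, predict(x + xi * d))
    grad = np.zeros_like(d)
    for i in range(len(d)):            # finite-difference stand-in for backprop
        d_step = d.copy()
        d_step[i] += xi
        grad[i] = (kl(p, predict(x + xi * d_step)) - base) / xi
    d = grad / (np.linalg.norm(grad) + 1e-12)
    return epsilon * d                 # r_vadv, scaled to the budget epsilon

x = rng.normal(size=5)
r_vadv = virtual_adversarial_perturbation(x)
vat_loss = kl(predict(x), predict(x + r_vadv))   # the VAT regularizer term
```

Adding `vat_loss` to the training objective smooths the conditional label distribution around each input, labeled or not.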
To simulate bosons on a qubit- or qudit-based quantum computer, one has to regularize the theory by truncating infinite-dimensional local Hilbert spaces to finite dimensions. In the search for practical quantum applications, it is important to know how big the truncation errors can be. In general, it is not easy to estimate errors unless we have a good quantum computer. In this paper we show that traditional sampling methods on classical devices, specifically Markov Chain Monte Carlo, can address this issue with a reasonable amount of computational resources available today. As a demonstration, we apply this idea to the scalar field theory on a two-dimensional lattice, with a size that goes beyond what is achievable using exact diagonalization methods. This method can be used to estimate the resources needed for realistic quantum simulations of bosonic theories, and also, to check the validity of the results of the corresponding quantum simulations.
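The classical-sampling idea can be illustrated with a plain Metropolis simulation of a lattice scalar field: sampling the field distribution shows how much probability weight lies beyond a truncation cutoff, which is the quantity that bounds the truncation error. A 1D toy (the paper works on a 2D lattice; parameters and the cutoff are illustrative):

```python
import math
import random

# Metropolis sampling of a 1D periodic lattice scalar field with action
# S = sum_x [ (phi_{x+1} - phi_x)^2 / 2 + (m2/2) phi_x^2 + (lam/4) phi_x^4 ].
random.seed(1)
L, m2, lam = 16, 1.0, 1.0
phi = [0.0] * L

def local_action(i, value):
    """Terms of S that involve site i, evaluated with phi[i] = value."""
    left, right = phi[(i - 1) % L], phi[(i + 1) % L]
    kinetic = 0.5 * ((value - left) ** 2 + (right - value) ** 2)
    return kinetic + 0.5 * m2 * value ** 2 + 0.25 * lam * value ** 4

def sweep(step=1.0):
    for i in range(L):
        proposal = phi[i] + random.uniform(-step, step)
        dS = local_action(i, proposal) - local_action(i, phi[i])
        if dS < 0 or random.random() < math.exp(-dS):
            phi[i] = proposal

for _ in range(500):           # thermalization
    sweep()

# Fraction of field values outside the truncation window |phi| <= phi_max:
# a proxy for the weight lost when the local Hilbert space is truncated.
phi_max, exceed, total = 2.5, 0, 0
for _ in range(2000):          # measurement sweeps
    sweep()
    exceed += sum(1 for v in phi if abs(v) > phi_max)
    total += L

tail_fraction = exceed / total
```

A small `tail_fraction` suggests that a truncated local Hilbert space covering |phi| <= phi_max captures the physics; widening or narrowing the cutoff maps out the truncation-error budget without a quantum computer.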
Climate change is becoming one of the greatest challenges to the sustainable development of modern society. Renewable energies with low density greatly complicate the online optimization and control processes, where modern advanced computational technologies, specifically quantum computing, have significant potential to help. In this paper, we discuss applications of quantum computing algorithms toward state-of-the-art smart grid problems. We suggest potential, exponential quantum speedup by the use of the Harrow-Hassidim-Lloyd (HHL) algorithms for sparse matrix inversions in power-flow problems. However, practical implementations of the algorithm are limited by the noise of quantum circuits, the hardness of realizations of quantum random access memories (QRAM), and the depth of the required quantum circuits. We benchmark the hardware and software requirements from the state-of-the-art power-flow algorithms, including QRAM requirements from hybrid phonon-transmon systems, and explicit gate counting used in HHL for explicit realizations. We also develop near-term algorithms of power flow by variational quantum circuits and implement real experiments for 6 qubits with a truncated version of power flows.
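The linear system at the heart of the power-flow application can be shown with the DC approximation, B·theta = P, which is exactly the kind of sparse system HHL would invert; here it is solved classically on a hypothetical 4-bus network (susceptance matrix and injections are illustrative):

```python
import numpy as np

# DC power flow: bus angles theta solve B theta = P, with B the (singular)
# bus susceptance matrix and P the net power injections (summing to zero).
B = np.array([[ 3.0, -1.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0,  0.0],
              [-1.0, -1.0,  3.0, -1.0],
              [-1.0,  0.0, -1.0,  2.0]])
P = np.array([0.5, -0.2, 0.1, -0.4])

# Fix bus 0 as the reference (angle 0) and solve the reduced, nonsingular system.
theta = np.zeros(4)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])
```

B is sparse for realistic grids (each bus connects to a few neighbors), which is the structural property that makes the HHL speedup plausible; the QRAM and circuit-depth caveats in the abstract concern loading B and P and reading out theta.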
Glass transitions are widely observed in a range of soft-matter systems. However, despite years of research, the physical mechanisms behind these transitions remain unknown. In particular, an important unresolved question is whether the glass transition is accompanied by a divergence of the correlation length of a characteristic static structure. Recently, a method was proposed that can predict long-time dynamics from purely static information with high accuracy. However, even this method is not universal and works well only for the Kob–Andersen system, a typical model of glass-forming liquids. In this study, we develop a method to extract the characteristic structures of glasses using machine learning, specifically convolutional neural networks. In particular, we extract the characteristic structures by quantifying the grounds for the decisions made by the network. We consider two qualitatively different glass-forming binary systems, and by comparison with several established structural indicators, we demonstrate that our method can identify characteristic structures that depend on the details of each system. Surprisingly, the extracted structures are closely correlated with the nonequilibrium aging dynamics in thermal fluctuations.
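The step of "quantifying the grounds for the network's decisions" is an attribution computation. For a linear toy classifier the input gradient is available in closed form, giving a scaled-down sketch of the same idea (the features and weights are illustrative stand-ins, not the paper's CNN):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 8                        # e.g. radial density bins around a particle
w = rng.normal(size=n_features)       # "trained" weights (illustrative values)
b = 0.1

def score(x):
    """Class score for 'glassy local environment'."""
    return float(w @ x + b)

def gradient_times_input(x):
    """Gradient-times-input attribution: for a linear model the input
    gradient is exactly w, so each entry measures how strongly that
    structural feature pushed the classification decision."""
    return w * x

x = rng.normal(size=n_features)       # one particle's structural descriptor
attribution = gradient_times_input(x)
```

Aggregating such per-particle attributions over a configuration is, in miniature, how decision-based extraction turns a black-box classifier into a map of characteristic structure.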
Defending deep neural networks against adversarial examples is a key challenge for AI safety. To improve robustness effectively, recent methods focus on important data points near the decision boundary in adversarial training. However, these methods are vulnerable to AutoAttack, an ensemble of parameter-free attacks that can be used for reliable evaluation. In this paper, we experimentally investigate the cause of their vulnerability and find that existing methods reduce the margins between the logits of the true label and the other labels while keeping their gradient norms non-negligible. The reduced margins and non-negligible gradient norms cause their vulnerability, because the largest logit can be easily flipped by a perturbation. Our experiments also show that the histogram of logit margins has two peaks, i.e., small and large logit margins. Based on these observations, we propose switching one-vs-the-rest loss (SOVR), which uses the one-vs-the-rest loss when data have small logit margins, so that it increases the margins. We find that SOVR increases logit margins more than existing methods while keeping gradient norms small, and that it outperforms them in terms of robustness against AutoAttack.
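The switching rule can be sketched directly: compute the logit margin of a sample and pick the loss accordingly. The threshold value and this exact switching criterion are illustrative simplifications of SOVR, not the paper's precise formulation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(logits, y):
    """Standard softmax cross-entropy (log-sum-exp computed stably)."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[y]

def one_vs_rest(logits, y):
    """Sum of binary logistic losses: pushes the true logit up and every
    other logit down, which widens the logit margin directly."""
    loss = -math.log(sigmoid(logits[y]))
    loss += sum(-math.log(sigmoid(-z)) for k, z in enumerate(logits) if k != y)
    return loss

def sovr_loss(logits, y, threshold=1.0):
    """Switching rule: small-margin samples get the one-vs-the-rest loss,
    large-margin samples the usual cross-entropy (threshold illustrative)."""
    margin = logits[y] - max(z for k, z in enumerate(logits) if k != y)
    return one_vs_rest(logits, y) if margin < threshold else cross_entropy(logits, y)
```

Samples in the small-margin peak of the histogram, the ones a perturbation can flip, receive the margin-widening loss, while confidently classified samples keep the standard objective.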